Re: Memory exhausted in AllocSetReAlloc - Mailing list pgsql-general

From Eric Cholet
Subject Re: Memory exhausted in AllocSetReAlloc
Date
Msg-id 110462769.1005758052@[192.168.1.14]
In response to Re: Memory exhausted in AllocSetReAlloc  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Memory exhausted in AllocSetReAlloc
List pgsql-general
--On Wednesday 14 November 2001 10:37 -0500 Tom Lane <tgl@sss.pgh.pa.us>
wrote:

> Doug McNaught <doug@wireboard.com> writes:
>> Have you double checked to make sure the ulimits for postmaster are
>> large enough?
>
> More to the point, what have you got SORT_MEM set to?  We're assuming
> that this allocation request is correct, and indeed it probably is if
> you've got SORT_MEM cranked up to the moon.  But if you don't then
> there may be a different issue (like corrupted data).
>
> FWIW, running with SORT_MEM set in the hundreds of megs on a 2gig
> box does not strike me as a good idea.  Keep in mind that that number
> is *per sorting operation*.  Each backend might be trying to use that
> much simultaneously --- in fact, in complex queries it's not hard to
> cause a single backend to be executing multiple sort steps.

That's it: trying to make Postgres use more memory, I had cranked SORT_MEM
up to a much too high value. I've just reduced it back to 1024 and it's
working fine now.
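
For anyone curious about the arithmetic behind Tom's warning, here is a
rough back-of-the-envelope sketch in Python (all three numbers are made-up
assumptions for illustration, not measurements from my box):

sort_mem_kb = 256 * 1024      # SORT_MEM cranked up to 256 MB, expressed in kB
sorts_per_query = 3           # a complex query can run several sort steps
active_backends = 20          # backends sorting simultaneously

# SORT_MEM is a per-sort-operation limit, so the worst case multiplies:
worst_case_kb = sort_mem_kb * sorts_per_query * active_backends
print("worst case: %.1f GB" % (worst_case_kb / (1024.0 * 1024.0)))  # ~15 GB on a 2 GB box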

Thanks to the other posters who replied, I found out FreeBSD's default
kernel limits the data segment size to 128M, so I raised that as well.
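
In case it helps anyone else on FreeBSD, here is a minimal sketch (using
Python's standard resource module; the 128M above is just the default I ran
into) of how a process can check the data segment limit it actually got:

import resource

# Soft and hard limits on the data segment size, in bytes; this is what
# caps how far the heap can grow before allocations start failing.
soft, hard = resource.getrlimit(resource.RLIMIT_DATA)

def fmt(limit):
    if limit == resource.RLIM_INFINITY:
        return "unlimited"
    return "%d MB" % (limit / (1024 * 1024))

print("data segment limit: soft=%s, hard=%s" % (fmt(soft), fmt(hard)))
# With a 128 MB limit, a large reallocation inside the backend can fail
# with "Memory exhausted" long before physical RAM is used up.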

I clearly didn't understand SORT_MEM's semantics. Any pointers or hints
on how to tune Postgres to make the best use of all the memory I've got
would be appreciated. My largest table is going to have ~200 million
rows, and the docs are a little terse on how to tune for that kind of volume.

Thanks,

--
Eric Cholet

